55 research outputs found

    An Improved Affine Equivalence Algorithm for Random Permutations

    In this paper we study the affine equivalence problem, where given two functions $\vec{F},\vec{G}: \{0,1\}^n \rightarrow \{0,1\}^n$, the goal is to determine whether there exist invertible affine transformations $A_1,A_2$ over $GF(2)^n$ such that $\vec{G} = A_2 \circ \vec{F} \circ A_1$. Algorithms for this problem have several well-known applications in the design and analysis of Sboxes, cryptanalysis of white-box ciphers and breaking a generalized Even-Mansour scheme. We describe a new algorithm for the affine equivalence problem and focus on the variant where $\vec{F},\vec{G}$ are permutations over $n$-bit words, as it has the widest applicability. The complexity of our algorithm is about $n^3 2^n$ bit operations with very high probability whenever $\vec{F}$ (or $\vec{G}$) is a random permutation. This improves upon the best known algorithms for this problem (published by Biryukov et al. at EUROCRYPT 2003), where the first algorithm has time complexity of $n^3 2^{2n}$ and the second has time complexity of about $n^3 2^{3n/2}$ and roughly the same memory complexity. Our algorithm is based on a new structure (called a \emph{rank table}) which is used to analyze particular algebraic properties of a function that remain invariant under invertible affine transformations. Besides its standard application in our new algorithm, the rank table is of independent interest and we discuss several of its additional potential applications.
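
    The defining relation is easy to state concretely. The following minimal sketch (illustrative Python with NumPy; all names and parameters are assumptions, not from the paper) represents $\vec{F}$, $\vec{G}$ as lookup tables, encodes an affine map as a pair $(M, c)$ over GF(2), and verifies $\vec{G} = A_2 \circ \vec{F} \circ A_1$ by exhausting all $2^n$ inputs. This is only the verification step; finding $A_1, A_2$ efficiently is the problem the paper's rank-table algorithm addresses.

```python
import random
import numpy as np

def apply_affine(M, c, x, n):
    """Apply the affine map A(x) = M*x + c over GF(2); x is an n-bit integer, bit i = (x >> i) & 1."""
    bits = np.array([(x >> i) & 1 for i in range(n)], dtype=np.uint8)
    y = (M @ bits + c) % 2
    return int(sum(int(b) << i for i, b in enumerate(y)))

def affine_equivalent(F, G, M1, c1, M2, c2, n):
    """Check G = A2 o F o A1 on every input (verification only, not the equivalence search itself)."""
    return all(G[x] == apply_affine(M2, c2, F[apply_affine(M1, c1, x, n)], n)
               for x in range(1 << n))

# Build G from a random 3-bit permutation F and two fixed invertible affine maps, then verify.
n = 3
F = list(range(1 << n)); random.shuffle(F)
M1 = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1]], dtype=np.uint8)  # invertible over GF(2)
M2 = np.array([[1, 1, 0], [0, 1, 0], [1, 0, 1]], dtype=np.uint8)  # invertible over GF(2)
c1 = np.array([1, 0, 1], dtype=np.uint8)
c2 = np.array([0, 1, 1], dtype=np.uint8)
G = [apply_affine(M2, c2, F[apply_affine(M1, c1, x, n)], n) for x in range(1 << n)]
assert affine_equivalent(F, G, M1, c1, M2, c2, n)
```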

    An Algorithmic Framework for the Generalized Birthday Problem

    The generalized birthday problem (GBP) was introduced by Wagner in 2002 and has been shown to have many applications in cryptanalysis. In its typical variant, we are given access to a function $H:\{0,1\}^{\ell} \rightarrow \{0,1\}^n$ (whose specification depends on the underlying problem) and an integer $K>0$. The goal is to find $K$ distinct inputs to $H$ (denoted by $\{x_i\}_{i=1}^{K}$) such that $\sum_{i=1}^{K}H(x_i) = 0$. Wagner's K-tree algorithm solves the problem in time and memory complexities of about $N^{1/(\lfloor \log K \rfloor + 1)}$ (where $N = 2^n$). Two important open problems raised by Wagner were (1) devise efficient time-memory tradeoffs for GBP, and (2) reduce the complexity of the K-tree algorithm for $K$ which is not a power of 2. In this paper, we make progress in both directions. First, we improve the best known GBP time-memory tradeoff curve (published independently by Nikolić and Sasaki and by Biryukov and Khovratovich) for all $K \geq 8$ from $T^2M^{\lfloor \log K \rfloor - 1} = N$ to $T^{\lceil (\log K)/2 \rceil + 1}M^{\lfloor (\log K)/2 \rfloor} = N$, applicable for a large range of parameters. For example, for $K = 8$ we improve the best previous tradeoff from $T^2M^2 = N$ to $T^3M = N$ and for $K = 32$ the improvement is from $T^2M^4 = N$ to $T^4M^2 = N$. Next, we consider values of $K$ which are not powers of 2 and show that in many cases even more efficient time-memory tradeoff curves can be obtained. Most interestingly, for $K \in \{6,7,14,15\}$ we present algorithms with the same time complexities as the K-tree algorithm, but with significantly reduced memory complexities. In particular, for $K = 6$ the K-tree algorithm achieves $T = M = N^{1/3}$, whereas we obtain $T = N^{1/3}$ and $M = N^{1/6}$. For $K = 14$, Wagner's algorithm achieves $T = M = N^{1/4}$, while we obtain $T = N^{1/4}$ and $M = N^{1/8}$. This gives the first significant improvement over the K-tree algorithm for small $K$. Finally, we optimize our techniques for several concrete GBP instances and show how to solve some of them with improved time and memory complexities compared to the state-of-the-art. Our results are obtained using a framework that combines several algorithmic techniques such as variants of the Schroeppel-Shamir algorithm for solving knapsack problems (devised in works by Howgrave-Graham and Joux and by Becker, Coron and Joux) and dissection algorithms (published by Dinur, Dunkelman, Keller and Shamir). It then builds on these techniques to develop new GBP algorithms.
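
    As a point of reference for the tradeoffs discussed above, the sketch below implements Wagner's K-tree algorithm for the commonly used XOR variant (the sum taken over GF(2)^n) with $K = 4$; the oracle H, the parameters and all names are illustrative assumptions, not taken from the paper. Four lists of about $2^{n/3}$ hash values are merged pairwise on the low $n/3$ bits, and the two merged lists are then matched on the remaining bits, giving the $N^{1/3}$ time and memory of the K-tree algorithm for $K = 4$.

```python
import hashlib
from collections import defaultdict

def H(x, n):
    """Toy n-bit function standing in for the GBP oracle (assumption: any fixed map works here)."""
    d = hashlib.sha256(x.to_bytes(8, "little")).digest()
    return int.from_bytes(d, "little") & ((1 << n) - 1)

def merge(A, B, mask):
    """Join pairs of entries whose values agree on the masked bits; keep the XOR of the values."""
    idx = defaultdict(list)
    for x, h in A:
        idx[h & mask].append((x, h))
    return [((x, y), h ^ g) for y, g in B for x, h in idx[g & mask]]

def k_tree_k4(n, list_size):
    """Wagner's K-tree for K = 4: find distinct x1..x4 with H(x1) ^ H(x2) ^ H(x3) ^ H(x4) = 0."""
    lists = [[(x, H(x, n)) for x in range(j * list_size, (j + 1) * list_size)] for j in range(4)]
    low_mask = (1 << (n // 3)) - 1
    L12 = merge(lists[0], lists[1], low_mask)   # low n/3 bits of the XOR are zero
    L34 = merge(lists[2], lists[3], low_mask)
    for ((x1, x2), (x3, x4)), _ in merge(L12, L34, (1 << n) - 1):  # full agreement
        return x1, x2, x3, x4
    return None

print(k_tree_k4(n=24, list_size=1 << 9))  # lists of roughly 2^{n/3} elements suffice w.h.p.
```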

    Side Channel Cube Attacks on Block Ciphers

    In this paper we formalize the notion of {\it leakage attacks} on iterated block ciphers, in which the attacker can find (via physical probing, power measurement, or any other type of side channel) one bit of information about the intermediate state of the encryption after each round. Since bits computed during the early rounds can typically be represented by low degree multivariate polynomials, cube attacks seem to be an ideal generic key recovery technique in these situations. However, the original cube attack requires extremely clean data, whereas the information provided by side channel attacks can be quite noisy. To address this problem, we develop a new variant of cube attack which can tolerate considerable levels of noise (affecting more than 11\% of the leaked bits in practical scenarios). Finally, we demonstrate our approach by describing efficient leakage attacks on two of the best known block ciphers, AES (requiring about $2^{35}$ time for full key recovery) and SERPENT (requiring about $2^{18}$ time for full key recovery).
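
    To make the low-degree observation concrete, here is a toy, self-contained illustration of the basic cube-attack step (the polynomial and all names below are assumptions invented for the example, not any real cipher): XOR-summing a leaked bit over a "cube" of public variables cancels every term that does not contain the whole cube, leaving a simple "superpoly" in the key bits that the attacker can read off.

```python
from itertools import product

def leaked_bit(v, k):
    """Toy leaked state bit: a degree-3 polynomial over GF(2) in public bits v[0..2] and key bits k[0..1]."""
    return (v[0] & v[1] & k[0]) ^ (v[0] & v[1] & v[2]) ^ (v[2] & k[1]) ^ k[0]

def cube_sum(cube, base_v, key):
    """XOR of the leaked bit over all assignments of the cube variables (the other bits stay fixed)."""
    s = 0
    for assignment in product((0, 1), repeat=len(cube)):
        v = list(base_v)
        for idx, bit in zip(cube, assignment):
            v[idx] = bit
        s ^= leaked_bit(v, key)
    return s

# Summing over the cube {v0, v1} cancels everything except the superpoly k0 ^ v2.
key = (1, 0)
for v2 in (0, 1):
    print(cube_sum([0, 1], base_v=[0, 0, v2], key=key))  # prints k0 ^ v2, i.e. 1 then 0
```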

    Refined Cryptanalysis of the GPRS Ciphers GEA-1 and GEA-2

    At EUROCRYPT 2021, Beierle et al. presented the first public analysis of the GPRS ciphers GEA-1 and GEA-2. They showed that although GEA-1 uses a 64-bit session key, it can be recovered with the knowledge of only 65 bits of keystream in time $2^{40}$ using 44 GiB of memory. The attack exploits a weakness in the initialization process of the cipher that was presumably hidden intentionally by the designers to reduce its security. While no such weakness was found for GEA-2, the authors presented an attack on this cipher with time complexity of about $2^{45}$. The main practical obstacle is the required knowledge of 12800 bits of keystream used to encrypt a full GPRS frame. Variants of the attack are applicable (but more expensive) when given fewer consecutive keystream bits, or when the available keystream is fragmented (it contains no long consecutive block). In this paper, we improve and complement the previous analysis of GEA-1 and GEA-2. For GEA-1, we devise an attack in which the memory complexity is reduced by a factor of about $2^{13} = 8192$ from 44 GiB to about 4 MiB, while the time complexity remains $2^{40}$. Our implementation recovers the GEA-1 session key in an average time of 2.5 hours on a modern laptop. For GEA-2, we describe two attacks that complement the analysis of Beierle et al. The first attack obtains a linear tradeoff between the number of consecutive keystream bits available to the attacker (denoted by $\ell$) and the time complexity. It improves upon the previous attack in the range of (roughly) $\ell \leq 7000$. Specifically, for $\ell = 1100$ the complexity of our attack is about $2^{54}$, while the previous one is not faster than the $2^{64}$ brute force complexity. In case the available keystream is fragmented, our second attack reduces the memory complexity of the previous attack by a factor of 512 from 32 GiB to 64 MiB with no time complexity penalty. Our attacks are based on new combinations of stream cipher cryptanalytic techniques and algorithmic techniques used in other contexts (such as solving the $k$-XOR problem).

    An Improved Algebraic Attack on Hamsi-256

    Hamsi is one of the 14 second-stage candidates in NIST's SHA-3 competition. The only previous attack on this hash function was a very marginal attack on its 256-bit version published by Thomas Fuhr at Asiacrypt 2010, which is better than generic attacks only for very short messages of fewer than 100 32-bit blocks, and is only 26 times faster than a straightforward exhaustive search attack. In this paper we describe a different algebraic attack which is less marginal: It is better than the best known generic attack for all practical message sizes (up to 4 gigabytes), and it outperforms exhaustive search by a factor of at least 512. The attack is based on the observation that in order to discard a possible second preimage, it suffices to show that one of its hashed output bits is wrong. Since the output bits of the compression function of Hamsi-256 can be described by low degree polynomials, it is actually faster to compute a small number of output bits by a fast polynomial evaluation technique rather than via the official algorithm.
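
    The filtering idea in the last two sentences can be illustrated with a toy model (the compression function, the bit positions and all names below are assumptions made up for the example, not Hamsi-256): one output bit is, by construction, a cheap low-degree polynomial of the message bits, so a candidate second preimage is discarded as soon as that bit disagrees with the target, without ever running the full compression.

```python
def cheap_output_bit(x):
    """Degree-2 polynomial over GF(2) in the 16 message bits (toy stand-in for a low-degree output bit)."""
    b = [(x >> i) & 1 for i in range(16)]
    return b[0] ^ b[3] ^ (b[5] & b[9]) ^ (b[12] & b[14])

def full_digest(x):
    """Toy 16-bit compression whose bit 0 equals the cheap polynomial above; the rest is deliberately slow."""
    y = x
    for _ in range(64):
        y = ((y * 0x9E37) ^ (y >> 7)) & 0xFFFF
    return (y & 0xFFFE) | cheap_output_bit(x)

def second_preimage_search(target, candidates):
    """Reject a candidate as soon as one cheaply computed output bit is wrong."""
    for c in candidates:
        if cheap_output_bit(c) != (target & 1):   # about half of the candidates die here
            continue
        if full_digest(c) == target:              # full computation only for the survivors
            return c
    return None

target = full_digest(0x1234)
print(second_preimage_search(target, (c for c in range(1 << 16) if c != 0x1234)))  # may print None
```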

    Fine-Grained Cryptanalysis: Tight Conditional Bounds for Dense k-SUM and k-XOR

    An average-case variant of the $k$-SUM conjecture asserts that finding $k$ numbers that sum to 0 in a list of $r$ random numbers, each of the order $r^k$, cannot be done in much less than $r^{\lceil k/2 \rceil}$ time. On the other hand, in the dense regime of parameters, where the list contains more numbers and many solutions exist, the complexity of finding one of them can be significantly improved by Wagner's $k$-tree algorithm. Such algorithms for $k$-SUM in the dense regime have many applications, notably in cryptanalysis. In this paper, assuming the average-case $k$-SUM conjecture, we prove that known algorithms are essentially optimal for $k = 3,4,5$. For $k > 5$, we prove the optimality of the $k$-tree algorithm for a limited range of parameters. We also prove similar results for $k$-XOR, where the sum is replaced with exclusive or. Our results are obtained by a self-reduction that, given an instance of $k$-SUM which has a few solutions, produces from it many instances in the dense regime. We solve each of these instances using the dense $k$-SUM oracle, and hope that a solution to a dense instance also solves the original problem. We deal with potentially malicious oracles (that repeatedly output correlated useless solutions) by an obfuscation process that adds noise to the dense instances. Using discrete Fourier analysis, we show that the obfuscation eliminates correlations among the oracle's solutions, even though its inputs are highly correlated.
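
    For orientation, the sketch below (illustrative Python, not from the paper) shows the standard $r^{\lceil k/2 \rceil} = r^2$ algorithm for $k = 3$, whose running time is what the conjecture asserts to be essentially optimal in the sparse regime. The usage example generates a dense instance, where the numbers are smaller and many solutions exist, so even this simple algorithm finds one quickly.

```python
import random
from collections import defaultdict

def three_sum_zero(nums):
    """Standard r^2 algorithm for k = 3: index every value, then look up the completion of each pair."""
    index = defaultdict(list)
    for i, v in enumerate(nums):
        index[v].append(i)
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            for k in index[-(nums[i] + nums[j])]:
                if k != i and k != j:
                    return nums[i], nums[j], nums[k]
    return None

# Dense instance: r numbers of order r^2 (instead of r^k = r^3), so many triples sum to zero.
r = 200
nums = [random.randrange(-r**2, r**2) for _ in range(r)]
print(three_sum_zero(nums))
```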

    Locality-Preserving Hashing for Shifts with Connections to Cryptography

    Can we sense our location in an unfamiliar environment by taking a sublinear-size sample of our surroundings? Can we efficiently encrypt a message that only someone physically close to us can decrypt? To solve this kind of problem, we introduce and study a new type of hash functions for finding shifts in sublinear time. A function $h:\{0,1\}^n \to \mathbb{Z}_n$ is a $(d,\delta)$ {\em locality-preserving hash function for shifts} (LPHS) if: (1) $h$ can be computed by (adaptively) querying $d$ bits of its input, and (2) $\Pr[h(x) \neq h(x \ll 1) + 1] \leq \delta$, where $x$ is random and $\ll 1$ denotes a cyclic shift by one bit to the left. We make the following contributions.
    * Near-optimal LPHS via Distributed Discrete Log: We establish a general two-way connection between LPHS and algorithms for distributed discrete logarithm in the generic group model. Using such an algorithm of Dinur et al. (Crypto 2018), we get LPHS with near-optimal error of $\delta = \tilde{O}(1/d^2)$. This gives an unusual example for the usefulness of group-based cryptography in a post-quantum world. We extend the positive result to non-cyclic and worst-case variants of LPHS.
    * Multidimensional LPHS: We obtain positive and negative results for a multidimensional extension of LPHS, making progress towards an optimal 2-dimensional LPHS.
    * Applications: We demonstrate the usefulness of LPHS by presenting cryptographic and algorithmic applications. In particular, we apply multidimensional LPHS to obtain an efficient "packed" implementation of homomorphic secret sharing and a sublinear-time implementation of location-sensitive encryption whose decryption requires a significantly overlapping view.
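
    The defining property is easy to demonstrate with a toy (and deliberately non-sublinear) construction: take $h(x)$ to be the start index of the lexicographically smallest rotation of $x$. This hash reads all $n$ bits, so $d = n$ and it says nothing about the sublinear query complexity that LPHS is really about, but it satisfies $h(x) = h(x \ll 1) + 1 \pmod{n}$ whenever the smallest rotation is unique, which holds with overwhelming probability for random $x$. The sketch below is an illustration under these assumptions, not a construction from the paper.

```python
import random

def min_rotation_hash(bits):
    """h(x) = start index of the lexicographically smallest cyclic rotation of x (reads all n bits)."""
    n = len(bits)
    return min(range(n), key=lambda i: bits[i:] + bits[:i])

n = 32
x = [random.randint(0, 1) for _ in range(n)]
shifted = x[1:] + x[:1]                     # x << 1: cyclic shift by one bit to the left
assert min_rotation_hash(x) == (min_rotation_hash(shifted) + 1) % n  # fails only on a rotation tie
```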

    Efficient Dissection of Bicomposite Problems with Cryptanalytic Applications

    In this paper we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called {\it dissection}, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with $r$ independent $n$-bit keys. All the previous error-free attacks required time $T$ and memory $M$ satisfying $TM = 2^{rn}$, and even if ``false negatives'' are allowed, no attack could achieve $TM < 2^{3rn/4}$. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product of $TM$, such as $T = 2^{4n}$ time and $M = 2^n$ memory for breaking the sequential execution of $r = 7$ block ciphers. The improvement ratio we obtain increases in an unbounded way as $r$ increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new dissection technique, we show how to use it in a generic way in order to improve rebound attacks on hash functions and to solve with better time complexities (for small memory complexities) hard combinatorial search problems, such as the well known knapsack problem.
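
    To convey the flavor of dissection, here is a minimal sketch for the smallest interesting case, $r = 4$, on a toy 8-bit cipher with 4-bit keys (E, D and all parameters are assumptions invented for the example; the paper's $r = 7$ algorithm is considerably more elaborate). The attacker guesses the middle state of the first plaintext, solves the two resulting 2-encryption halves by a standard meet in the middle, and joins the surviving half-keys on the middle state of a second plaintext, using about $2^n$ memory instead of the $2^{2n}$ a plain 4-encryption meet in the middle would need.

```python
from collections import defaultdict

MASK = 0xFF            # toy 8-bit block
KEYS = range(16)       # toy 4-bit keys, so the demo runs instantly

def E(k, x):
    """Toy keyed permutation on 8-bit blocks; any invertible round function works here."""
    x = (x + k) & MASK
    x = ((x << 3) | (x >> 5)) & MASK
    return x ^ ((k * 37) & MASK)

def D(k, y):
    y ^= (k * 37) & MASK
    y = ((y >> 3) | (y << 5)) & MASK
    return (y - k) & MASK

def two_mitm(src, dst):
    """All (ka, kb) with E(kb, E(ka, src)) == dst, via a standard meet in the middle."""
    fwd = defaultdict(list)
    for ka in KEYS:
        fwd[E(ka, src)].append(ka)
    return [(ka, kb) for kb in KEYS for ka in fwd[D(kb, dst)]]

def dissect4(pairs):
    """Dissection of 4-fold encryption: guess the middle state of the first pair,
    solve the two 2-encryption halves, then join the halves on the second pair."""
    (p0, c0), (p1, c1), *rest = pairs
    for mid in range(MASK + 1):                       # 2^n guesses of the middle state
        lower = two_mitm(p0, mid)                     # candidate (k1, k2) pairs
        upper = two_mitm(mid, c0)                     # candidate (k3, k4) pairs
        join = defaultdict(list)
        for k1, k2 in lower:
            join[E(k2, E(k1, p1))].append((k1, k2))   # middle state of the second pair
        for k3, k4 in upper:
            for k1, k2 in join[D(k3, D(k4, c1))]:
                if all(E(k4, E(k3, E(k2, E(k1, p)))) == c for p, c in rest):
                    return k1, k2, k3, k4
    return None

# Usage: generate a few known plaintext/ciphertext pairs under a secret key and recover it.
secret = (3, 7, 11, 14)
pairs = [(p, E(secret[3], E(secret[2], E(secret[1], E(secret[0], p))))) for p in (1, 2, 3, 4)]
print(dissect4(pairs))
```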

    Memory-Efficient Algorithms for Finding Needles in Haystacks

    One of the most common tasks in cryptography and cryptanalysis is to find some interesting event (a needle) in an exponentially large collection (haystack) of $N = 2^n$ possible events, or to demonstrate that no such event is likely to exist. In particular, we are interested in finding needles which are defined as events that happen with an unusually high probability of $p \gg 1/N$ in a haystack which is an almost uniform distribution on $N$ possible events. When the search algorithm can only sample values from this distribution, the best known time/memory tradeoff for finding such an event requires $O(1/Mp^2)$ time given $O(M)$ memory. In this paper we develop much faster needle searching algorithms in the common cryptographic setting in which the distribution is defined by applying some deterministic function $f$ to random inputs. Such a distribution can be modelled by a random directed graph with $N$ vertices in which almost all the vertices have $O(1)$ predecessors while the vertex we are looking for has an unusually large number of $O(pN)$ predecessors. When we are given only a constant amount of memory, we propose a new search methodology which we call \textbf{NestedRho}. As $p$ increases, such random graphs undergo several subtle phase transitions, and thus the log-log dependence of the time complexity $T$ on $p$ becomes a piecewise linear curve which bends four times. Our new algorithm is faster than the $O(1/p^2)$ time complexity of the best previous algorithm in the full range of $1/N < p < 1$, and in particular it improves the previous time complexity by a significant factor of $\sqrt{N}$ for any $p$ in the range $N^{-0.75} < p < N^{-0.5}$. When we are given more memory, we show how to combine the \textbf{NestedRho} technique with the parallel collision search technique in order to further reduce its time complexity. Finally, we show how to apply our new search technique to more complicated distributions with multiple peaks when we want to find all the peaks whose probabilities are higher than $p$.
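
    As background for the name, the classical constant-memory primitive in this setting is a Rho walk on the functional graph of $f$: Floyd's cycle finding returns a collision $f(a) = f(b)$ with $a \neq b$, and values with unusually many predecessors are the likeliest collision values. The sketch below (with a truncated hash standing in for $f$; names and parameters are assumptions for illustration) shows only this basic building block, not the paper's NestedRho procedure itself.

```python
import hashlib

N_BITS = 24

def f(x):
    """Deterministic function defining the sampled distribution; a truncated hash stands in for it."""
    d = hashlib.sha256(x.to_bytes(4, "little")).digest()
    return int.from_bytes(d, "little") & ((1 << N_BITS) - 1)

def rho_collision(start):
    """Floyd's constant-memory cycle finding on the functional graph of f.
    Returns (a, b, f(a)) with f(a) == f(b) and a != b, assuming the start is not already on the cycle."""
    tortoise, hare = f(start), f(f(start))
    while tortoise != hare:                 # phase 1: find a point on the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    a, b = start, hare
    while f(a) != f(b):                     # phase 2: walk to the two colliding predecessors
        a, b = f(a), f(b)
    return a, b, f(a)

print(rho_collision(2024))                  # about sqrt(2^N_BITS) evaluations of f, O(1) memory
```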